
Collaborating Authors

Francesca Rossi


AAAI presidential panel – AI and sustainability

AIHub

The Future of AI Research report, published in March 2025, aims to clearly identify the trajectory of AI research in a structured way. The report was led by outgoing AAAI President Francesca Rossi and covers 17 different AI topics. Members of the report team, along with other selected AI practitioners, are taking part in a series of video panel discussions covering selected chapters from the report. In the fourth panel, the AI experts tackle the topic of AI and sustainability, exploring the critical balance between harnessing AI's potential and managing its environmental impact. They discuss: the growth of AI and its impact on infrastructure, looking beyond energy use, AI for accelerating breakthroughs, and strategies for investing in grid capacity and innovations.


Report on the future of AI research

AIHub

Image taken from the front cover of the Future of AI Research report. The Association for the Advancement of Artificial Intelligence (AAAI) has published a report on the Future of AI Research. The report, which was announced by outgoing AAAI President Francesca Rossi during the AAAI 2025 conference, covers 17 different AI topics and aims to clearly identify the trajectory of AI research in a structured way. The report is the result of a Presidential Panel, chaired by Francesca Rossi and comprising 24 experienced AI researchers, who worked on the project between summer 2024 and spring 2025. As well as the views of the panel members, the report also draws on community feedback, received from 475 AI researchers via a survey.


2024 digest of digests

AIHub

So much has happened in the AI space over the course of the past 12 months. We've reported on some of the larger, and lesser-covered, stories in our regular monthly digests. Here, we look back through the archives and pick out one story from each of our digests.

Interview with Bo Li: A comprehensive assessment of trustworthiness in GPT models. Bo Li and colleagues won an outstanding datasets and benchmarks track award at NeurIPS 2023 for their work DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models. In this interview, Bo told us about the research, the team's methodology, and key findings.

AAAI 2024 takes place. This month saw the running of the 38th Annual AAAI Conference.


Interview with Francesca Rossi – talking sustainable development goals, AI regulation, and AI ethics

AIHub

At the International Joint Conference on Artificial Intelligence (IJCAI) I was lucky enough to catch up with Francesca Rossi, IBM fellow and AI Ethics Global Leader, and President of AAAI. There were so many questions I wanted to ask, and we covered some pressing topics in AI today. Andrea Rafai: My first question concerns the UN Sustainable Development Goals (SDGs). It seems that there is a lot of potential for using AI in helping to work towards the 17 goals. What is your view on these goals and the long-term outlook?


Diverse perspectives are critical for ethical AI

#artificialintelligence

It is widely acknowledged that Ada Lovelace, born in the 19th century, was the first computer programmer, but her contributions, and those of many other brilliant women in science and technology, have been erased and attributed to men. Even today, diverse voices in the male-dominated tech industry are overlooked and non-traditional backgrounds dismissed as "not techy enough". During a recent podcast series in collaboration with IBM, I had the opportunity to meet amazing women from multi-disciplinary backgrounds in key AI and technology roles at the tech giant. Their non-typical backgrounds give them a unique ability to identify opportunities, as well as ethical gaps in AI, that would otherwise be missed. Here are some highlights from our wide-ranging conversations that remind us of the critical importance of non-traditional backgrounds in technology.



Ethics of AI Surveillance Tech -- Conversation with Francesca Rossi

#artificialintelligence

This week, we launched our new series of interviews with brilliant women working in the critical space of AI Ethics and discuss their groundbreaking work. Our first 6 episodes are sponsored by IBM…


Contextual Bandit with Missing Rewards

Bouneffouf, Djallel, Upadhyay, Sohini, Khazaeni, Yasaman

arXiv.org Artificial Intelligence

We consider a novel variant of the contextual bandit problem (i.e., the multi-armed bandit with side-information, or context, available to a decision-maker) where the reward associated with each context-based decision may not always be observed ("missing rewards"). This new problem is motivated by certain online settings, including clinical trial and ad recommendation applications. In order to address the missing rewards setting, we propose to combine the standard contextual bandit approach with an unsupervised learning mechanism such as clustering. Unlike standard contextual bandit methods, by leveraging clustering to estimate missing rewards, we are able to learn from each incoming event, even those with missing rewards. Promising empirical results are obtained on several real-life datasets.
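The combination the abstract describes (a contextual bandit whose missing rewards are imputed from a clustering of the context space) can be illustrated with a minimal sketch. This is not the authors' implementation: the epsilon-greedy arm selection, the fixed random cluster centres, and all names below are illustrative assumptions, with cluster-mean imputation standing in for the paper's clustering mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

class ClusteredBandit:
    """Epsilon-greedy contextual bandit that imputes a missing reward
    with the running mean reward of the context's nearest cluster."""

    def __init__(self, n_arms, dim, n_clusters=4, eps=0.1):
        self.n_arms = n_arms
        self.eps = eps
        # per-arm ridge-regression statistics (A theta = b)
        self.A = [np.eye(dim) for _ in range(n_arms)]
        self.b = [np.zeros(dim) for _ in range(n_arms)]
        # assumption: fixed random centres over [0, 1]^dim; a real system
        # would fit the clustering (e.g. k-means) on observed contexts
        self.centres = rng.random((n_clusters, dim))
        self.cluster_sum = np.zeros(n_clusters)
        self.cluster_cnt = np.zeros(n_clusters)

    def _cluster(self, x):
        # index of the nearest cluster centre to context x
        return int(np.argmin(((self.centres - x) ** 2).sum(axis=1)))

    def select(self, x):
        # explore with probability eps, otherwise pick the arm with the
        # highest estimated reward for this context
        if rng.random() < self.eps:
            return int(rng.integers(self.n_arms))
        scores = [x @ np.linalg.solve(self.A[a], self.b[a])
                  for a in range(self.n_arms)]
        return int(np.argmax(scores))

    def update(self, x, arm, reward=None):
        c = self._cluster(x)
        if reward is None:
            # missing reward: impute with the cluster's mean observed reward
            if self.cluster_cnt[c] == 0:
                return  # nothing to impute from yet
            reward = self.cluster_sum[c] / self.cluster_cnt[c]
        else:
            self.cluster_sum[c] += reward
            self.cluster_cnt[c] += 1
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# toy simulation: arm a's reward is the a-th context feature,
# and roughly 30% of rewards are never observed
bandit = ClusteredBandit(n_arms=2, dim=2)
for _ in range(500):
    x = rng.random(2)
    a = bandit.select(x)
    r = x[a] if rng.random() > 0.3 else None
    bandit.update(x, a, r)
```

The key point the abstract makes is visible in `update`: events with missing rewards still contribute to the per-arm estimates, via the imputed cluster mean, instead of being discarded as a standard contextual bandit would do.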


Building ethically aligned AI

#artificialintelligence

The more AI agents are deployed in scenarios with possibly unexpected situations, the more they need to be flexible, adaptive, and creative in achieving their goals. Thus, a certain level of freedom to choose the best path to a specific goal is necessary in making AI robust and flexible enough to be deployed successfully in real-life scenarios. This is especially true when AI systems tackle difficult problems whose solution cannot be accurately defined by a traditional rule-based approach but require the data-driven and/or learning approaches increasingly being used in AI. Indeed, data-driven AI systems, such as those using machine learning, are very successful in terms of accuracy and flexibility, and they can be very "creative" in solving a problem, finding solutions that could positively surprise humans and teach them innovative ways to resolve a challenge. However, creativity and freedom without boundaries can sometimes lead to undesired actions: the AI system could achieve its goal in ways that are not considered acceptable according to the values and norms of the impacted community. Thus, there is a growing need to understand how to constrain the actions of an AI system by providing boundaries within which the system must operate. This is usually referred to as the "value alignment" problem, since such boundaries should model the values and principles required for the specific AI application scenario.
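The idea of giving an agent freedom to rank its own options while keeping it inside externally imposed boundaries can be sketched very simply. This is a hypothetical illustration, not the article's method: the function name, the string-valued actions, and the forbidden-set boundary are all assumptions.

```python
from typing import Callable, Iterable, Optional

def choose_aligned_action(
    ranked_actions: Iterable[str],
    is_permitted: Callable[[str], bool],
) -> Optional[str]:
    """Return the agent's highest-ranked action that stays inside the
    behavioural boundary, or None if every candidate is vetoed."""
    for action in ranked_actions:
        if is_permitted(action):
            return action
    return None

# hypothetical boundary: a norm of the impacted community, expressed
# as a set of forbidden actions
forbidden = {"deceive_user"}
best = choose_aligned_action(
    ["deceive_user", "ask_clarification"],  # agent's own preference order
    lambda a: a not in forbidden,
)
```

The separation matters: the agent remains free to order candidates however its learned objective suggests, while the boundary check, which models the community's values, has the final say on what is actually executed.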